The following tutorial is a step-by-step demo of the methods presented in our paper [1]. You may download the code and datasets from the supplementary materials to experiment with them yourself.

1 Before we start

Please make sure the following libraries are installed on your computer, as well as OpenBUGS –

library(R2OpenBUGS)
library(dplyr)
library(doParallel)
library(doSNOW)
library(tictoc)
library(tibble)
library(multinma)
library(tidyr)
library(ggplot2)
library(broom)
library(broom.mixed)
library(reshape2)
library(lemon)
library(kableExtra)
library(purrr)
library(plotly)
library(shiny)

Setting the working directory and sourcing the custom functions –

#Setting working directory to current one
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))

#Sourcing functions
source('NMI_Functions.R')

2 The data

The following example is one instance of a random dataset – one of thousands that were used in the simulation study in our paper. Here, we consider an evidence network consisting of four treatments, denoted \(A\), \(B\), \(C\) and \(D\), across seven studies: six – three \(A\)-\(B\) and three \(A\)-\(C\) trials – reported at the aggregate level, and one \(A\)-\(D\) trial reported at the individual patient level. The data were generated using the following non-shared effect modification logistic model – \[ \log\frac{p_i}{1-p_i} = \varepsilon_{i} + \begin{cases} \begin{array}{ll} -1.39 + 0.69T_i + x_{1i}T_i, &\text{$A$-$B$ trials},\\[.5em] -1.39 + T_i + 1.61x_{1i}T_i + x_{2i}T_i,&\text{$A$-$C$ trials},\\[.5em] -1.39 + 1.5T_i - 1.2x_{1i}T_i - x_{2i}T_i,&\text{$A$-$D$ trial}, \end{array} \end{cases} \] where \(x_1\) and \(x_2\) denote two binary effect modifiers, \(\varepsilon_{i}\sim\mathcal{N}\left(0,0.2^2\right)\) and \[T_i = \begin{cases} \begin{array}{ll} 0,& \text{the $i^{\mathrm{th}}$ patient received treatment $A$},\\[1em] 1,& \text{otherwise}. \end{array} \end{cases}\]
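To make the generating mechanism concrete, the \(A\)-\(D\) trial can be simulated in a few lines of R. This is only a sketch: the 1:1 allocation and the effect-modifier prevalences are assumptions (the actual generating code is part of the supplementary materials), and for simplicity \(x_1\) and \(x_2\) are drawn independently here, whereas the tutorial data are mildly correlated –

```r
# Illustrative simulation of the A-D trial under the model above.
# Sketch only: allocation ratio and EM prevalences are assumptions,
# and x1, x2 are drawn independently for simplicity.
set.seed(1)
n   <- 600                                # per-study sample size, as in the AgD
x1  <- rbinom(n, 1, 0.5)                  # binary effect modifiers
x2  <- rbinom(n, 1, 0.35)
Trt <- rep(0:1, each = n / 2)             # 0 = A (control), 1 = D
eps <- rnorm(n, 0, 0.2)                   # patient-level noise term
lp  <- eps + (-1.39 + 1.5 * Trt - 1.2 * x1 * Trt - x2 * Trt)
Y   <- rbinom(n, 1, plogis(lp))           # dichotomous outcome
head(data.frame(x1, x2, Tr = ifelse(Trt == 1, "D", "A"), Y, Study = 7))
```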

2.1 Reading the various datasets

The goal of this tutorial is to demonstrate the different data requirements for the various indirect treatment comparison (ITC) methods, fit them and compare the results. In the example that will follow, the different aggregate-level datasets (AgD) were all derived from the same simulated data, and are therefore comparable. Next, we will read and display each dataset.

2.1.1 Individual patient-level data

Starting with the IPD study –

#reading the data
IPD = read.csv('Example_IPD.csv') #IPD (for all methods)
head(IPD, 10)
##    x1 x2 Tr Y Study TrtClass
## 1   1  0  D 0     7      Trt
## 2   0  1  D 1     7      Trt
## 3   1  1  D 0     7      Trt
## 4   0  0  A 0     7     Ctrl
## 5   1  0  A 1     7     Ctrl
## 6   0  0  D 0     7      Trt
## 7   1  0  A 0     7     Ctrl
## 8   0  1  A 0     7     Ctrl
## 9   1  1  D 0     7      Trt
## 10  0  0  A 0     7     Ctrl

The final \(\mathtt{TrtClass}\) column, containing the different treatment classes (in this case just \(\mathtt{Trt}\) and \(\mathtt{Ctrl}\)), is essential for running ML-NMR later.

It may also be interesting to estimate the Pearson correlation between the effect modifiers, as it plays an important role in NMI –

#estimating the Pearson correlation between x1 and x2
(rho_hat = cor(IPD[grep('x', colnames(IPD))]))
##           x1        x2
## x1 1.0000000 0.2102674
## x2 0.2102674 1.0000000

2.1.2 Aggregate-level NMA/NMR data

This is the typical AgD format for standard network meta-analysis and network meta-regression [2] [3] [4] –

(NMR_AgD = read.csv('Example_AgD_NMR_NMA.csv')) #AgD for NMR/NMA
##   Study Trt1 Trt2   n    x1    x2    TE    se
## 1     1    A    B 600 0.628 0.457 1.387 0.188
## 2     2    A    B 600 0.843 0.618 1.475 0.183
## 3     3    A    B 600 0.787 0.592 1.612 0.187
## 4     4    A    C 600 0.733 0.503 2.757 0.204
## 5     5    A    C 600 0.705 0.468 2.526 0.198
## 6     6    A    C 600 0.600 0.405 2.134 0.191
## 7     7    A    D 600 0.533 0.303 0.667 0.187

The standard NMA will ignore the \(\mathtt{x_1}\) and \(\mathtt{x_2}\) columns, while NMR will use them as regressors.

2.1.3 NMI-format aggregate-level data

Now for the NMI aggregate-level data in its particular format –

(NMI_AgD = read.csv('Example_AgD_NMI.csv')) #AgD for NMI
##    Study Trt1 Trt2   n    x1    x2    TE    se
## 1      1    A    B 600 0.628 0.457 1.387 0.188
## 2      1    A    B         1       1.829 0.242
## 3      1    A    B         0       0.492 0.319
## 4      1    A    B               1 1.363 0.267
## 5      1    A    B               0 1.464 0.270
## 6      2    A    B 600 0.843 0.618 1.475 0.183
## 7      2    A    B         1       1.577 0.198
## 8      2    A    B         0       0.946 0.525
## 9      2    A    B               1 1.517 0.232
## 10     2    A    B               0 1.412 0.298
## 11     3    A    B 600 0.787 0.592 1.612 0.187
## 12     3    A    B         1       1.858 0.215
## 13     3    A    B         0       0.643 0.398
## 14     3    A    B               1 1.662 0.241
## 15     3    A    B               0 1.542 0.296
## 16     4    A    C 600 0.733 0.503 2.757 0.204
## 17     4    A    C         1       3.415 0.270
## 18     4    A    C         0       1.879 0.392
## 19     4    A    C               1 3.135 0.308
## 20     4    A    C               0 2.515 0.286
## 21     5    A    C 600 0.705 0.468 2.526 0.198
## 22     5    A    C         1       3.412 0.272
## 23     5    A    C         0       1.038 0.329
## 24     5    A    C               1 3.264 0.325
## 25     5    A    C               0 2.033 0.258
## 26     6    A    C 600   0.6 0.405 2.134 0.191
## 27     6    A    C         1       3.139 0.279
## 28     6    A    C         0       1.056 0.287
## 29     6    A    C               1 3.768 0.380
## 30     6    A    C               0 1.307 0.233

Note that every five rows in the above table refer to a single study: the first row reports the study-level outcomes and the following four are subgroup analyses. Later on, we will impute this table.
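The layout can be made explicit in code. The sketch below re-enters study 1 by hand (so that it is self-contained) and assumes the blank cells of the csv are read in as NA; it then tags each row by the analysis it reports –

```r
library(dplyr)

# Study 1 of the NMI AgD, re-entered by hand; blank csv cells become NA
NMI_AgD_1 <- tibble::tribble(
  ~Study, ~Trt1, ~Trt2,  ~n,   ~x1,   ~x2,   ~TE,   ~se,
       1,   'A',   'B', 600, 0.628, 0.457, 1.387, 0.188,
       1,   'A',   'B',  NA,     1,    NA, 1.829, 0.242,
       1,   'A',   'B',  NA,     0,    NA, 0.492, 0.319,
       1,   'A',   'B',  NA,    NA,     1, 1.363, 0.267,
       1,   'A',   'B',  NA,    NA,     0, 1.464, 0.270)

# First row: study-level outcome; rows 2-5: the four subgroup analyses
tagged <- NMI_AgD_1 %>%
  mutate(Analysis = case_when(
    !is.na(x1) & !is.na(x2) ~ 'overall',
    !is.na(x1)              ~ paste0('x1 = ', x1, ' subgroup'),
    TRUE                    ~ paste0('x2 = ', x2, ' subgroup')))
tagged
```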

2.1.4 Multilevel NMR-format aggregate-level data

Finally, the multilevel NMR (ML-NMR) [5] aggregate-level data –

(ML_NMR_AgD = read.csv('Example_AgD_ML_NMR.csv'))  #AgD for ML-NMR
##    Study Tr   n   r        X1        X2 TrtClass
## 1      1  A 318  58 0.5974843 0.4716981     Ctrl
## 2      1  B 282 133 0.6631206 0.4397163      Trt
## 3      2  A 317  65 0.8422713 0.6246057     Ctrl
## 4      2  B 283 150 0.8445230 0.6113074      Trt
## 5      3  A 292  57 0.7671233 0.5890411     Ctrl
## 6      3  B 308 169 0.8051948 0.5941558      Trt
## 7      4  A 301  56 0.7508306 0.4950166     Ctrl
## 8      4  C 299 234 0.7157191 0.5117057      Trt
## 9      5  A 280  58 0.6821429 0.4607143     Ctrl
## 10     5  C 320 245 0.7250000 0.4750000      Trt
## 11     6  A 293  57 0.6382253 0.3924915     Ctrl
## 12     6  C 307 206 0.5635179 0.4169381      Trt

Unlike the previous AgDs, this one must be provided as raw arm-level counts (for a dichotomous outcome), rather than as pre-calculated relative treatment effects and standard errors.
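The two formats carry the same information: for a dichotomous outcome, the study-level \(\mathtt{TE}\) and \(\mathtt{se}\) of the NMA/NMR table are simply the log odds ratio and its large-sample standard error computed from the arm-level counts. Reproducing study 1 by hand –

```r
# Study 1 arm-level counts from the ML-NMR AgD
rA <- 58;  nA <- 318   # events / patients on treatment A
rB <- 133; nB <- 282   # events / patients on treatment B

# log odds ratio (B vs A) and its large-sample standard error
TE <- log((rB / (nB - rB)) / (rA / (nA - rA)))
se <- sqrt(1 / rB + 1 / (nB - rB) + 1 / rA + 1 / (nA - rA))
round(c(TE = TE, se = se), 3)
##    TE    se 
## 1.387 0.188
```

These match the first row of the NMA/NMR AgD shown earlier.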

2.1.5 Visualizing the evidence network

The multinma package [6] makes plotting the network very convenient –

ML_NMR_data = list(IPD = IPD, AgD = ML_NMR_AgD)

net = multinma::combine_network(
  set_ipd(ML_NMR_data$IPD %>%
            mutate(TrC = Tr),
          study = Study,
          trt = Tr,
          trt_class = TrtClass,
          r = Y),
  set_agd_arm(ML_NMR_data$AgD %>%
                mutate(TrC = Tr),
              study = Study,
              trt = Tr,
              trt_class = TrtClass,
              r = r,
              n = n),
  trt_ref = "A")


par(mar = c(0,0,0,0))
plot(net, weight_edges = TRUE, weight_nodes = TRUE)

3 Fitting the different models

Now that all datasets have been read and presented, we may proceed to fit the different models.

3.1 Setting values to all inputs

First, there is a moderately long list of inputs that need to be supplied by the user for the program to run.

Starting with MCMC-related inputs –

N_chains = 3 #number of MCMC chains
N_iter = 1500 #number of MCMC iterations (in total)
burnin = 510 #number of MCMC samples to discard
n_int = 500 #number of MC integration points for ML-NMR

These are the column names the NMI functions require –

#AgD and IPD effect modifier column names
AgD_EM_cols = IPD_EM_cols = c('x1', 'x2')
Study_col = 'Study' #study column name
Trt_cols = c('Trt1', 'Trt2') #AgD treatment column names
TE_col = 'TE' #AgD treatment effect estimate column name
SE_col = 'se' #AgD standard error column name
IPD_Trt_col = 'Tr' #IPD treatment column name
outcome_col = 'Y' #IPD outcome column name

A vector of sample sizes for the AgD studies (by order of appearance) –

samp_sizes = rep(600, 6) #sample sizes for AgD studies

These are used in the calculation of the Best Linear Unbiased Predictor (BLUP) imputation, in the first step of NMI.
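To give a flavour of what the imputation does, here is a deliberately simplified sketch: for binary effect modifiers, the unreported mean of \(x_2\) within the \(x_1 = 1\) subgroup can be approximated by a linear projection built from the study-level means and the IPD correlation \(\hat\rho\). The actual BLUP adds finite-sample shrinkage, which is where the sample sizes above come in –

```r
# Simplified sketch of the imputation idea (no finite-sample shrinkage):
# approximate E[x2 | x1 = 1] in study 1 by a linear projection
rho <- 0.2103                  # IPD correlation between x1 and x2
p1  <- 0.628; p2 <- 0.457      # study-1 means of x1 and x2 (NMI AgD)
sd1 <- sqrt(p1 * (1 - p1))     # binary EMs: sd follows from the mean
sd2 <- sqrt(p2 * (1 - p2))

x2_in_x1_subgroup <- p2 + rho * (sd2 / sd1) * (1 - p1)
round(x2_in_x1_subgroup, 3)
## [1] 0.538
```

This is in line with the value imputed for study 1's \(x_1 = 1\) row in Section 3.2.1, though the exact NMI computation may differ.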

And, finally, the effect modifier levels at which we wish to perform ITC –

#value of effect modifier x1 to be used in ITC
x1 = .675
#value of effect modifier x2 to be used in ITC
x2 = .475

x_vect = c(x1, x2)

NMI uses the imputed AgD to solve two linear systems, in order to interpolate the treatment effect estimates and standard errors at \(\mathtt{x\_vect}\) for all studies.
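To picture the interpolation step, the sketch below fits an ordinary least-squares plane to one study's (already imputed) rows and reads it off at \(x_1 = 0.675\), \(x_2 = 0.475\). Treat this only as an illustration: the NMI functions solve the corresponding linear systems for all studies at once, and also interpolate the standard errors, which a plain OLS fit does not. The five rows are study 1's imputed values from Section 3.2.1 –

```r
# Illustration only: regress one study's imputed TEs on the
# effect-modifier levels and evaluate the fit at x_vect
study1 <- data.frame(
  x1 = c(0.628, 1.000, 0.000, 0.739, 0.535),
  x2 = c(0.457, 0.538, 0.321, 1.000, 0.000),
  TE = c(1.387, 1.829, 0.492, 1.363, 1.464))

fit    <- lm(TE ~ x1 + x2, data = study1)
TE_hat <- unname(predict(fit, newdata = data.frame(x1 = 0.675, x2 = 0.475)))
round(TE_hat, 3)
## [1] 1.439
```

This agrees with the study-1 value in the interpolated table of Section 3.2.2.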

3.2 Running NMI

This single command performs the first two NMI steps (imputation and interpolation) –

#Imputing the AgD
NMI_object = NMI_interpolation(IPD, NMI_AgD, x_vect, AgD_EM_cols, IPD_EM_cols, 
                               Study_col, samp_sizes, Trt_cols, TE_col, SE_col, 
                               IPD_Trt_col) 

3.2.1 The imputed aggregate-level data

This is the result of the first NMI step. Here, the last five rows comprise the IPD analyses.

#Have a look at the imputed dataset
NMI_object$Imputed
##    Study Trt1 Trt2    x1    x2     TE    se
## 1      1    A    B 0.628 0.457  1.387 0.188
## 2      1    A    B 1.000 0.538  1.829 0.242
## 3      1    A    B 0.000 0.321  0.492 0.319
## 4      1    A    B 0.739 1.000  1.363 0.267
## 5      1    A    B 0.535 0.000  1.464 0.270
## 6      2    A    B 0.843 0.618  1.475 0.183
## 7      2    A    B 1.000 0.662  1.577 0.198
## 8      2    A    B 0.000 0.381  0.946 0.525
## 9      2    A    B 0.903 1.000  1.517 0.232
## 10     2    A    B 0.746 0.000  1.412 0.298
## 11     3    A    B 0.787 0.592  1.612 0.187
## 12     3    A    B 1.000 0.646  1.858 0.215
## 13     3    A    B 0.000 0.393  0.643 0.398
## 14     3    A    B 0.858 1.000  1.662 0.241
## 15     3    A    B 0.683 0.000  1.542 0.296
## 16     4    A    C 0.733 0.503  2.757 0.204
## 17     4    A    C 1.000 0.566  3.415 0.270
## 18     4    A    C 0.000 0.329  1.879 0.392
## 19     4    A    C 0.825 1.000  3.135 0.308
## 20     4    A    C 0.639 0.000  2.515 0.286
## 21     5    A    C 0.705 0.468  2.526 0.198
## 22     5    A    C 1.000 0.536  3.412 0.272
## 23     5    A    C 0.000 0.306  1.038 0.329
## 24     5    A    C 0.807 1.000  3.264 0.325
## 25     5    A    C 0.615 0.000  2.033 0.258
## 26     6    A    C 0.600 0.405  2.134 0.191
## 27     6    A    C 1.000 0.489  3.139 0.279
## 28     6    A    C 0.000 0.279  1.056 0.287
## 29     6    A    C 0.725 1.000  3.768 0.380
## 30     6    A    C 0.515 0.000  1.307 0.233
## 31     7    A    D 0.533 0.303  0.667 0.187
## 32     7    A    D 1.000 0.394  0.008 0.278
## 33     7    A    D 0.000 0.200  1.305 0.266
## 34     7    A    D 0.692 1.000 -0.286 0.374
## 35     7    A    D 0.464 0.000  0.997 0.223

Compare the imputed NMI AgD with the original table in the data section.

3.2.2 The interpolated treatment effect estimates and standard errors

Here are the interpolated TEs and SEs, all at \(x_1 = 0.675\) and \(x_2 = 0.475\) –

#The data submitted to NMA for ITC
NMI_object$Final
##   Study Trt1 Trt2    x1    x2    TE    se
## 1     1    A    B 0.675 0.475 1.439 0.189
## 2     2    A    B 0.675 0.475 1.370 0.216
## 3     3    A    B 0.675 0.475 1.480 0.196
## 4     4    A    C 0.675 0.475 2.786 0.200
## 5     5    A    C 0.675 0.475 2.571 0.195
## 6     6    A    C 0.675 0.475 2.540 0.209
## 7     7    A    D 0.675 0.475 0.289 0.221

3.2.3 Assessing goodness of (in-sample) interpolation

We can visually inspect the goodness of the interpolation by plotting the originally observed treatment effect estimates against their interpolated counterparts (at the same \(x_1\) and \(x_2\) values) –

NMI_diagnostic_plotly(NMI_object)

In case one wonders about the goodness of the standard error interpolation: that system is underdetermined, and as such it interpolates the original “in-sample” observations without any error (meaning all points in an equivalent plot would align perfectly along the \(y=x\) line).

3.2.4 Fitting standard aggregate-level NMA to the adjusted outcomes

To conclude NMI, the final table of the previous section is submitted to a standard aggregate-level NMA.

#Model fitting
NMI_sim = NMA_run(NMI_object$Final, N_chains, N_iter, burnin) 
#Summarising results
NMI_summary = NMA_NMI_summary(NMI_sim) 
#TEs and CrIs
NMI_results = result_table(NMI_summary) 

The results are stored and will eventually be displayed alongside those of the remaining methods.

3.3 Fitting standard aggregate-level NMA

#Model fitting
NMA_sim = NMA_run(NMR_AgD, N_chains, N_iter, burnin) 
#Summarising results
NMA_summary = NMA_NMI_summary(NMA_sim) 
#TEs and CrIs
NMA_results = result_table(NMA_summary) 

3.4 Fitting network meta-regression (NMR)

#Model fitting
NMR_sim = NMA_Meta_Reg_run_2D(NMR_AgD, N_chains, N_iter, burnin) 
#Summarising results
NMR_summary = NMA_Metareg_summary_2D(NMR_sim, x_vect)
#TEs and CrIs
NMR_results = result_table(NMR_summary) 

3.5 Fitting multi-level network meta-regression (ML-NMR)

#Model fitting
ML_NMR_sim = ML_NMR_Run_2D(ML_NMR_data, N_iter, N_chains, burnin, n_int) 
#Summarising results
ML_NMR_summ = ML_NMR_summary_2D(n_trts = 4, ML_NMR_sim, x_vect) 
#TEs and CrIs
ML_NMR_results = result_table(ML_NMR_summ) 

4 Comparing performance

Let us now compare the results of the various methods against the true parameters used in the data generation.

4.1 Calculating the true treatment effects for reference

We start by calculating the true (relative) treatment effects from the logistic model parameters -

#These are the logistic regression coefficients that were used to generate the 
#data used in this demonstration 
beta_AB = c(-1.39, 0, 0, 0.69, 1.00, 0.00)
beta_AC = c(-1.39, 0, 0, 1.00, 1.61, 1.00)
beta_AD = c(-1.39, 0, 0, 1.50, -1.20, -1.00)

#converting the logistic regression coefficients into treatment effects
d = rbind(beta_AB, beta_AC, beta_AD)[,4:6]
trt_effs = d%*%c(1, x1, x2)
(trt_effs = c(trt_effs, trt_effs[c(2,3,3)] - trt_effs[c(1,1,2)]))
## [1]  1.36500  2.56175  0.21500  1.19675 -1.15000 -2.34675

4.2 Comparing the different methods

We can now put together everything we have done.

4.2.1 Comparison table

display_result_table(NMA_results, NMR_results, ML_NMR_results, NMI_results, trt_effs)
NMA NMR ML-NMR NMI True Trt. Eff.
\(d_{\mathrm{AB}}\) 1.49 [1.28 ;1.70] 1.46 [1.17 ;1.73] 1.54 [1.31 ;1.77] 1.43 [1.21 ;1.65] 1.36
\(d_{\mathrm{AC}}\) 2.46 [2.24 ;2.68] 2.38 [2.12 ;2.61] 2.65 [2.41 ;2.92] 2.63 [2.41 ;2.86] 2.56
\(d_{\mathrm{AD}}\) 0.67 [0.30 ;1.02] 0.61 [0.07 ;1.19] 0.20 [-0.21 ;0.66] 0.29 [-0.15 ;0.70] 0.21
\(d_{\mathrm{BC}}\) 0.96 [0.66 ;1.29] 0.92 [0.52 ;1.33] 1.12 [0.79 ;1.48] 1.20 [0.88 ;1.54] 1.20
\(d_{\mathrm{BD}}\) -0.83 [-1.26 ;-0.41] -0.86 [-1.56 ;-0.07] -1.33 [-1.85 ;-0.81] -1.15 [-1.64 ;-0.67] -1.15
\(d_{\mathrm{CD}}\) -1.79 [-2.22 ;-1.39] -1.76 [-2.30 ;-1.18] -2.45 [-3.01 ;-1.90] -2.35 [-2.84 ;-1.89] -2.35

4.2.2 Graphical comparison

result_forest_plot(NMA_summary, NMR_summary, ML_NMR_summ, NMI_summary, trt_effs)

In this example, NMI did extremely well in terms of estimation accuracy (especially in the case of the indirect comparisons, namely: \(d_{\mathrm{BC}}\), \(d_{\mathrm{BD}}\) and \(d_{\mathrm{CD}}\)). Note that while this tutorial compares various ITC methods on a single random dataset, the paper itself contains results from many thousands of simulated datasets. We strongly encourage you to give it a read before drawing any conclusions.

5 Conclusion

NMI is an attractive option when subgroup analyses for all effect modifiers are available. The extra work put toward data extraction more than pays off when the data are driven by a non-shared effect-modification data-generating mechanism.

6 References

1. Harari O, Soltanifar M, Cappelleri JC, Verhoek A, Ouwens M, Daly C, et al. Network meta-interpolation: Effect modification adjustment in network meta-analysis using subgroup analyses. Research Synthesis Methods. 2022. https://doi.org/10.1002/jrsm.1608.
2. Dias S, Ades AE, Welton NJ, Jansen JP, Sutton AJ. Network meta-analysis for decision making. John Wiley & Sons Ltd; 2018.
3. Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU technical support document 2: A generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials. National Institute for Health and Care Excellence; 2014.
4. Dias S, Sutton AJ, Welton NJ, Ades AE. NICE DSU technical support document 3: Heterogeneity: Subgroups, meta-regression, bias and bias-adjustment. National Institute for Health and Clinical Excellence; 2011.
5. Phillippo DM, Dias S, Ades AE, Belger M, Brnabic A, Schacht A, et al. Multilevel network meta-regression for population-adjusted treatment comparisons. J R Stat Soc Ser A Stat Soc. 2020;183:1189–210.
6. Phillippo DM. multinma: Network meta-analysis of individual and aggregate data in Stan. 2021.